Asynchronous Newton-Raphson Consensus for Distributed Convex Optimization
Authors
Abstract
We consider the distributed unconstrained minimization of separable convex cost functions, where the global cost is given by the sum of several local and private costs, each associated with a specific agent of a given communication network. We specifically address an asynchronous distributed optimization technique called Newton-Raphson Consensus. Besides having low computational complexity, low communication requirements and being interpretable as a distributed Newton-Raphson algorithm, the technique also has the beneficial properties of requiring very little coordination and naturally supporting time-varying topologies. In this work we analytically prove that under some assumptions it shows either local or global convergence properties, and corroborate this result by means of numerical simulations.
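To make the mechanism concrete, here is a minimal synchronous sketch of a Newton-Raphson-Consensus-style iteration for scalar local costs. The names and parameters (nrc, P, eps, iters) are illustrative assumptions, not taken from the paper, which moreover treats the asynchronous case.

```python
import numpy as np

def nrc(grad, hess, P, eps=0.05, iters=300):
    """Synchronous Newton-Raphson-Consensus-style iteration (sketch)."""
    n = len(grad)
    x = np.zeros(n)
    # local Newton-Raphson ingredients: g_i = f_i''(x)*x - f_i'(x), h_i = f_i''(x)
    g = np.array([hess[i](x[i]) * x[i] - grad[i](x[i]) for i in range(n)])
    h = np.array([hess[i](x[i]) for i in range(n)])
    y, z = g.copy(), h.copy()  # dynamic-average-consensus trackers
    for _ in range(iters):
        x = (1 - eps) * x + eps * y / z          # damped Newton-like step
        g_new = np.array([hess[i](x[i]) * x[i] - grad[i](x[i]) for i in range(n)])
        h_new = np.array([hess[i](x[i]) for i in range(n)])
        y = P @ (y + g_new - g)                  # mix with neighbors and
        z = P @ (z + h_new - h)                  # track the changing inputs
        g, h = g_new, h_new
    return x

# toy usage: quadratic local costs f_i(x) = a_i * (x - b_i)**2 / 2,
# whose global minimizer is sum(a*b) / sum(a) = 2.0 here
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.0, 1.0, 2.0, 3.0])
grad = [lambda x, ai=ai, bi=bi: ai * (x - bi) for ai, bi in zip(a, b)]
hess = [lambda x, ai=ai: ai for ai in a]
P = np.full((4, 4), 0.25)   # doubly stochastic mixing matrix (complete graph)
print(nrc(grad, hess, P))   # every agent approaches 2.0
```

Note how each agent exchanges only two scalars per dimension with its neighbors, which is consistent with the low communication requirements the abstract claims.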
Similar Articles
Asynchronous Newton-Raphson Consensus for Robust Distributed Convex Optimization
A general trend in the development of distributed convex optimization procedures is to robustify existing algorithms so that they can tolerate the characteristics and conditions of communications among real devices. This manuscript follows this tendency by robustifying a promising distributed convex optimization procedure known as Newton-Raphson consensus. More specifically, we modi...
Newton-Raphson Consensus for Distributed Convex Optimization
We address the problem of distributed unconstrained convex optimization under separability assumptions, i.e., the framework where a network of agents, each endowed with a local private multidimensional convex cost and subject to communication constraints, wants to collaborate to compute the minimizer of the sum of the local costs. We propose a design methodology that combines average co...
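The identity behind this design methodology can be sketched as follows (scalar case, notation mine): the global Newton-Raphson step is a ratio of two network averages, each of which an average consensus scheme can compute.

```latex
\[
  x^{+} \;=\; x - \frac{\sum_{i=1}^{N} f_i'(x)}{\sum_{i=1}^{N} f_i''(x)}
        \;=\; \frac{\tfrac{1}{N}\sum_{i=1}^{N}\bigl( f_i''(x)\,x - f_i'(x) \bigr)}
                   {\tfrac{1}{N}\sum_{i=1}^{N} f_i''(x)}
\]
```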
متن کاملDistributed Newton Methods for Strictly Convex Consensus Optimization Problems in Multi-Agent Networks
Various distributed optimization methods have been developed for consensus optimization problems in multi-agent networks. Most of these methods use only gradient or subgradient information of the objective functions and therefore suffer from slow convergence rates. Recently, a distributed Newton method, whose appeal stems from the use of second-order information and its fast convergence rate, has been de...
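For context, the contrast in convergence rates alluded to here is the standard one from centralized optimization (stated near the optimum for well-conditioned strongly convex problems; not specific to the cited method):

```latex
\[
  \underbrace{\|x_{k+1}-x^{\star}\| \le \rho\,\|x_k-x^{\star}\|,\;\; \rho<1}_{\text{first-order (gradient) methods: linear rate}}
  \qquad\text{vs.}\qquad
  \underbrace{\|x_{k+1}-x^{\star}\| \le C\,\|x_k-x^{\star}\|^{2}}_{\text{Newton methods: local quadratic rate}}
\]
```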
Efficient algorithms for online convex optimization and their applications
In this thesis we study algorithms for online convex optimization and their relation to approximate optimization. In the first part, we propose a new algorithm for a general online optimization framework called online convex optimization. Whereas previous efficient algorithms are mostly gradient-descent based, the new algorithm is inspired by the Newton-Raphson method for convex optimization, a...
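Below is a minimal unconstrained sketch of the kind of Newton-inspired online update the thesis points to (an Online-Newton-Step-style iteration); the parameters gamma and eps, and the omission of the projection step, are simplifying assumptions of mine.

```python
import numpy as np

def online_newton_step(grads, d, gamma=1.0, eps=1.0):
    """Online-Newton-Step-style updates (unconstrained sketch)."""
    x = np.zeros(d)
    A = eps * np.eye(d)                         # running curvature estimate
    for grad in grads:                          # one gradient oracle per round
        g = grad(x)
        A += np.outer(g, g)                     # rank-one second-order update
        x = x - np.linalg.solve(A, g) / gamma   # Newton-like step
        # a full implementation would also project x back onto the
        # feasible set in the norm induced by A
        yield x

# toy usage: quadratic round losses l_t(x) = ||x - c_t||**2 / 2
rng = np.random.default_rng(0)
targets = rng.normal(size=(50, 3))
for x in online_newton_step(((lambda x, c=c: x - c) for c in targets), d=3):
    pass
print(x)   # the final iterate has adapted to the stream of targets
```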
A Block-wise, Asynchronous and Distributed ADMM Algorithm for General Form Consensus Optimization
Many machine learning models, including those with non-smooth regularizers, can be formulated as consensus optimization problems, which can be solved by the alternating direction method of multipliers (ADMM). Many recent efforts have been made to develop asynchronous distributed ADMM to handle large amounts of training data. However, all existing asynchronous distributed ADMM methods are based ...
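For reference, here is a sketch of the synchronous global-consensus ADMM baseline that such asynchronous, block-wise variants extend; the quadratic local costs and all names are illustrative assumptions, chosen so the local minimization has a closed form.

```python
import numpy as np

def consensus_admm(a, b, rho=1.0, iters=100):
    """Synchronous consensus ADMM for f_i(x) = a_i * (x - b_i)**2 / 2."""
    n = len(a)
    x, u, z = np.zeros(n), np.zeros(n), 0.0
    for _ in range(iters):
        # local step: argmin_x f_i(x) + (rho/2)*(x - z + u_i)**2, closed form
        x = (a * b + rho * (z - u)) / (a + rho)
        z = np.mean(x + u)               # consensus (averaging) step
        u = u + x - z                    # dual / multiplier update
    return z

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.0, 1.0, 2.0, 3.0])
print(consensus_admm(a, b))   # approaches sum(a*b)/sum(a) = 2.0
```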